
    BioPortal: Ontologies and integrated data resources at the click of a mouse

    BioPortal is a Web portal that provides access to a library of biomedical ontologies and terminologies developed in OWL, RDF(S), OBO format, Protégé frames, and Rich Release Format. BioPortal functionality, driven by a service-oriented architecture, includes the ability to browse, search, and visualize ontologies (Figure 1). The Web interface also facilitates community-based participation in the evaluation and evolution of ontology content.
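    Since BioPortal's functionality is exposed through a service-oriented architecture, ontology search can also be scripted. The sketch below composes a search request URL against BioPortal's documented REST endpoint; the API key value is a placeholder and no request is actually sent.

    ```python
    # Hedged sketch: building a BioPortal REST search URL.
    # The endpoint (data.bioontology.org/search) and the `apikey` parameter
    # reflect BioPortal's documented API; the key value is a placeholder.
    from urllib.parse import urlencode

    def bioportal_search_url(query, ontologies=None, apikey="YOUR_API_KEY"):
        """Return a search URL; optionally restrict to a list of ontology acronyms."""
        params = {"q": query, "apikey": apikey}
        if ontologies:  # e.g. ["NCIT", "SNOMEDCT"] limits the search scope
            params["ontologies"] = ",".join(ontologies)
        return "https://data.bioontology.org/search?" + urlencode(params)

    url = bioportal_search_url("melanoma", ontologies=["NCIT", "SNOMEDCT"])
    print(url)
    ```

    The returned URL can be fetched with any HTTP client; BioPortal responds with a JSON page of matching ontology classes.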

    National Center for Biomedical Ontology: Advancing biomedicine through structured organization of scientific knowledge

    The National Center for Biomedical Ontology is a consortium that comprises leading informaticians, biologists, clinicians, and ontologists, funded by the National Institutes of Health (NIH) Roadmap, to develop innovative technology and methods that allow scientists to record, manage, and disseminate biomedical information and knowledge in machine-processable form. The goals of the Center are (1) to help unify the divergent and isolated efforts in ontology development by promoting high-quality, open-source, standards-based tools to create, manage, and use ontologies; (2) to create new software tools so that scientists can use ontologies to annotate and analyze biomedical data; (3) to provide a national resource for the ongoing evaluation, integration, and evolution of biomedical ontologies and associated tools and theories in the context of driving biomedical projects (DBPs); and (4) to disseminate the tools and resources of the Center and to identify, evaluate, and communicate best practices of ontology development to the biomedical community. Through the research activities within the Center, collaborations with the DBPs, and interactions with the biomedical community, our goal is to help scientists work more effectively in the e-science paradigm, enhancing experiment design, experiment execution, data analysis, information synthesis, hypothesis generation and testing, and the understanding of human disease.

    The Levo

    The Levo is a performance-tracking barbell attachment that will enhance and aid an athlete's ability to understand their performance in the weight room. By tracking the movement of the barbell, several useful performance measurements can be extracted, analyzed, and communicated to the athlete in a meaningful way. This document will state all requirements, their explanations, and expectations for the product at the alpha-phase proof of concept and beta-phase prototype. Furthermore, it will also include the test plan, sustainability, safety, and engineering standards used to create the Levo.

    CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison

    Large, labeled datasets have driven deep learning methods to achieve expert-level performance on a variety of medical imaging tasks. We present CheXpert, a large dataset that contains 224,316 chest radiographs of 65,240 patients. We design a labeler to automatically detect the presence of 14 observations in radiology reports, capturing uncertainties inherent in radiograph interpretation. We investigate different approaches to using the uncertainty labels for training convolutional neural networks that output the probability of these observations given the available frontal and lateral radiographs. On a validation set of 200 chest radiographic studies which were manually annotated by 3 board-certified radiologists, we find that different uncertainty approaches are useful for different pathologies. We then evaluate our best model on a test set composed of 500 chest radiographic studies annotated by a consensus of 5 board-certified radiologists, and compare the performance of our model to that of 3 additional radiologists in the detection of 5 selected pathologies. On Cardiomegaly, Edema, and Pleural Effusion, the model ROC and PR curves lie above all 3 radiologist operating points. We release the dataset to the public as a standard benchmark to evaluate performance of chest radiograph interpretation models. The dataset is freely available at https://stanfordmlgroup.github.io/competitions/chexpert. Comment: Published in AAAI 2019.
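    The "different approaches to using the uncertainty labels" mentioned in the abstract amount to different policies for mapping uncertain mentions to binary training targets. A minimal sketch, assuming the dataset's label convention (1.0 positive, 0.0 negative, -1.0 uncertain, blank for not mentioned); the function name and policy labels here are illustrative, not taken from the CheXpert codebase:

    ```python
    # Hedged sketch: resolving uncertainty labels into training targets
    # under three assumed policies before computing a binary loss.

    def resolve_uncertain(label, policy="U-Zeros"):
        """Map a raw observation label to a binary target (or None to mask it)."""
        if label is None:          # observation not mentioned in the report
            return None            # typically masked out of the loss
        if label == -1.0:          # uncertain mention
            if policy == "U-Zeros":
                return 0.0         # treat uncertain as negative
            if policy == "U-Ones":
                return 1.0         # treat uncertain as positive
            if policy == "U-Ignore":
                return None        # exclude uncertain labels from the loss
            raise ValueError(f"unknown policy: {policy}")
        return label               # definite labels pass through unchanged

    labels = [1.0, 0.0, -1.0, None]
    print([resolve_uncertain(x, "U-Ones") for x in labels])
    # [1.0, 0.0, 1.0, None]
    ```

    Because the abstract reports that different policies work best for different pathologies, the policy would in practice be chosen per observation rather than globally.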

    A Simple Standard for Sharing Ontological Mappings (SSSOM).

    Despite progress in the development of standards for describing and exchanging scientific information, the lack of easy-to-use standards for mapping between different representations of the same or similar objects in different databases poses a major impediment to data integration and interoperability. Mappings often lack the metadata needed to be correctly interpreted and applied. For example, are two terms equivalent or merely related? Are they narrow or broad matches? Or are they associated in some other way? Such relationships between the mapped terms are often not documented, which leads to incorrect assumptions and makes them hard to use in scenarios that require a high degree of precision (such as diagnostics or risk prediction). Furthermore, the lack of descriptions of how mappings were done makes it hard to combine and reconcile mappings, particularly curated and automated ones. We have developed the Simple Standard for Sharing Ontological Mappings (SSSOM), which addresses these problems by: (i) Introducing a machine-readable and extensible vocabulary to describe metadata that makes imprecision, inaccuracy and incompleteness in mappings explicit. (ii) Defining an easy-to-use, simple table-based format that can be integrated into existing data science pipelines without the need to parse or query ontologies, and that integrates seamlessly with Linked Data principles. (iii) Implementing open and community-driven collaborative workflows that are designed to evolve the standard continuously to address changing requirements and mapping practices. (iv) Providing reference tools and software libraries for working with the standard. In this paper, we present the SSSOM standard, describe several use cases in detail, and survey some of the existing work on standardizing the exchange of mappings, with the goal of making mappings Findable, Accessible, Interoperable and Reusable (FAIR). The SSSOM specification can be found at http://w3id.org/sssom/spec.
Database URL: http://w3id.org/sssom/spec
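    Because SSSOM mappings are a simple table, they can be consumed with nothing more than a TSV reader. A minimal sketch, using column names drawn from the SSSOM specification's core slots; the two mapping rows are invented for illustration:

    ```python
    # Hedged sketch: reading a minimal SSSOM TSV with the standard library.
    # The example rows are hypothetical; real files typically also carry a
    # commented YAML metadata header, which is omitted here for brevity.
    import csv
    import io

    sssom_tsv = (
        "subject_id\tsubject_label\tpredicate_id\tobject_id\t"
        "mapping_justification\tconfidence\n"
        "HP:0001627\tAbnormal heart morphology\tskos:exactMatch\tMP:0000266\t"
        "semapv:LexicalMatching\t0.95\n"
        "HP:0000118\tPhenotypic abnormality\tskos:broadMatch\tMP:0000001\t"
        "semapv:ManualMappingCuration\t0.80\n"
    )

    mappings = list(csv.DictReader(io.StringIO(sssom_tsv), delimiter="\t"))
    # The predicate_id slot makes the match type explicit: exact vs. broad.
    exact = [m for m in mappings if m["predicate_id"] == "skos:exactMatch"]
    print(len(mappings), len(exact))  # 2 1
    ```

    The `mapping_justification` and `confidence` columns carry exactly the kind of provenance metadata the abstract argues is needed to reconcile curated and automated mappings.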

    De-black-boxing health AI: demonstrating reproducible machine learning computable phenotypes using the N3C-RECOVER Long COVID model in the All of Us data repository

    Machine learning (ML)-driven computable phenotypes are among the most challenging to share and reproduce. Despite this difficulty, the urgent public health considerations around Long COVID make it especially important to ensure the rigor and reproducibility of Long COVID phenotyping algorithms so that they can be made available to a broad audience of researchers. As part of the NIH Researching COVID to Enhance Recovery (RECOVER) Initiative, researchers with the National COVID Cohort Collaborative (N3C) devised and trained an ML-based phenotype to identify patients highly likely to have Long COVID. Supported by RECOVER, N3C and NIH's All of Us study partnered to reproduce the output of N3C's trained model in the All of Us data enclave, demonstrating model extensibility in multiple environments. This case study in ML-based phenotype reuse illustrates how open-source software best practices and cross-site collaboration can de-black-box phenotyping algorithms, prevent unnecessary rework, and promote open science in informatics.

    Renal shielding and dosimetry for patients with severe systemic sclerosis receiving immunoablation with total body irradiation in the scleroderma: cyclophosphamide or transplantation trial.

    PURPOSE: To describe renal shielding techniques and dosimetry in delivering total body irradiation (TBI) to patients with severe systemic sclerosis (SSc) enrolled in a hematopoietic stem cell transplant protocol. METHODS AND MATERIALS: The Scleroderma: Cyclophosphamide or Transplantation (SCOT) protocol uses a lymphoablative preparative regimen including 800 cGy TBI delivered in two 200-cGy fractions twice a day before CD34(+) selected autologous hematopoietic stem cell transplantation. Lung and kidney doses are limited to 200 cGy to protect organs damaged by SSc. Kidney block proximity to the spinal cord was investigated, and guidelines were developed for acceptable lumbar area TBI dosing. Information about kidney size and the organ shifts from supine to standing positions was recorded using diagnostic ultrasound (US). Minimum distance between the kidney blocks (dkB) and the lumbar spine region dose was recorded, and in vivo dosimetry was performed at several locations to determine the radiation doses delivered. RESULTS: Eleven patients were treated at our center with an anteroposterior (AP)/posteroanterior (PA) TBI technique. A 10% to 20% dose inhomogeneity in the lumbar spine region was achieved with a minimum kidney block separation of 4 to 5 cm. The average lumbar spine dose was 179.6 ± 18.1 cGy, with an average dkB of 5.0 ± 1.0 cm. Kidney block shield design was accomplished using a combination of US and noncontrast computerized tomography (CT) or CT imaging alone. The renal US revealed a wide range of kidney displacement from upright to supine positions. Overall, the average in vivo dose for the kidney prescription point was 193.4 ± 5.1 cGy. CONCLUSIONS: The dose to the kidneys can be attenuated while maintaining a 10% to 20% dose inhomogeneity in the lumbar spine area. Kidneys were localized more accurately using both US and CT imaging. With this technique, renal function has been preserved, and the study continues to enroll patients.